
    Appearance modeling for persistent object tracking in wide-area and full motion video

    Object tracking is a core element of computer vision and autonomous systems. As such, single- and multiple-object tracking have been widely investigated, especially for full motion video sequences. The acquisition of wide-area motion imagery (WAMI) from moving airborne platforms is a much more recent sensor innovation with an array of defense and civilian applications, offering a unique combination of dense spatial and temporal coverage unmatched by other sensor systems. Airborne WAMI presents a host of challenges for object tracking, including large data volume, multi-camera arrays, image stabilization, low-resolution targets, target appearance variability, and high background clutter, especially in urban environments. Large, time-varying, low-frame-rate imagery poses a range of difficulties for reliable long-term multi-target tracking. The focus of this thesis is the Likelihood of Features Tracking (LOFT) testbed system, an appearance-based (single instance) object tracker designed specifically for WAMI that follows the track-before-detect paradigm. The motivation for tracking using dynamics before detecting is that large-scale data can be handled while keeping computational cost to a bare minimum. Searching for an object everywhere in a large frame is not practical: urban scenes contain many similar objects, clutter, and high-rise structures, and an exhaustive search carries a greatly increased computational cost. LOFT bypasses this difficulty by using filtering and dynamics to constrain the search to a more realistic region within the large frame, and uses multiple features to discern objects of interest. The objects of interest are supplied to the algorithm as input in the form of bounding boxes.
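    The filtering-and-dynamics search constraint described above can be sketched as follows; this is a minimal illustration assuming a simple constant-velocity prediction (the function name, window margin, and parameters are hypothetical, not LOFT's actual design):

```python
def predict_search_window(prev_center, velocity, frame_shape, margin=24):
    """Constrain matching to a small window around the predicted target
    position instead of searching the whole frame (track-before-detect style)."""
    cy, cx = prev_center
    vy, vx = velocity
    py, px = cy + vy, cx + vx                      # constant-velocity prediction
    y0, y1 = max(0, py - margin), min(frame_shape[0], py + margin)
    x0, x1 = max(0, px - margin), min(frame_shape[1], px + margin)
    return (y0, y1, x0, x1)                        # row/column bounds of the crop

# A target last seen at (100, 200), moving (+5, -3) px/frame, in a 1080x1920 frame:
print(predict_search_window((100, 200), (5, -3), (1080, 1920)))  # (81, 129, 173, 221)
```

    Matching inside a 48x48 window rather than a full WAMI frame reduces the candidate search space by several orders of magnitude, which is the computational point the abstract makes.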
The main goal of this work is to present an appearance update modeling strategy that fits LOFT's track-before-detect paradigm and to demonstrate the accuracy of the overall system, both in comparison with other state-of-the-art tracking algorithms and with and without this strategy in place. The update strategy, which uses various information cues from the Radon transform, was designed with specific performance goals in mind: a minimal increase in computational cost and a considerable increase in the precision and recall rates of the overall system. This is demonstrated with supporting performance numbers using standard evaluation techniques from the literature. The extension of the LOFT WAMI tracker to include a more detailed appearance model, with an update strategy well suited to persistent target tracking, is novel in the opinion of the author. Key engineering contributions have been made through this work: the core LOFT system has been evaluated as part of several government research and development programs, including the Air Force Research Lab's Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) Enterprise to the Edge (CETE), the Army Research Lab's Advanced Video Activity Analytics (AVAA), and a proposed fine-grained distributed cloud computing architecture for processing at the edge. A simplified version of LOFT was developed for tracking objects in standard videos and entered in the Visual Object Tracking (VOT) Challenge, held in conjunction with leading computer vision conferences. LOFT incorporating the proposed appearance adaptation module produces significantly better tracking results in aerial WAMI of urban scenes.
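    As a toy illustration of confidence-gated appearance updating (the Radon-cue strategy in this work is richer; the blending rule and the `alpha`/`tau` parameters here are illustrative assumptions, not the thesis's actual update):

```python
import numpy as np

def update_template(template, patch, confidence, alpha=0.05, tau=0.6):
    """Blend the newly matched patch into the appearance template only when
    the match confidence is high; otherwise keep the template unchanged.
    Gating the update this way limits drift onto background clutter."""
    if confidence < tau:                       # unreliable match: do not adapt
        return template
    return (1.0 - alpha) * template + alpha * patch
```

    The small forgetting factor keeps the template stable over long sequences while still absorbing gradual appearance change, the trade-off the abstract refers to as balancing adaptation with stability.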

    Multi-scale directional vesselness stamping based segmentation for polyps from wireless capsule endoscopy

    Poster from: V. B. S. Prasath, R. Pelapur, and K. Palaniappan. Multi-scale directional vesselness stamping based segmentation for polyps from wireless capsule endoscopy. 36th Annual International Conference of the IEEE EMBS (EMBC 2014), Chicago, USA, Aug. 26-30, 2014. Late-breaking research poster.

    Multi-focus Image Fusion Using Epifluorescence Microscopy for Robust Vascular Segmentation

    We are building a computerized image analysis system for the dura mater vascular network from fluorescence microscopy images. We propose a system that couples a multi-focus image fusion module with a robust adaptive filtering based segmentation. The robust adaptive filtering scheme handles noise without destroying small structures, and the multi-focus image fusion considerably improves the overall segmentation quality by integrating information from multiple images. Based on the segmentation and a network extraction module, quantitative analysis was carried out. The applications are in microvascular network remodeling in the dura mater of ovariectomized pigs and mice, where quantitative analysis provides a guideline of vascular stability.

    Robust Orientation and Appearance Adaptation for Wide-Area Large Format Video Object Tracking

    Poster presentation at Advanced Video and Signal-Based Surveillance (IEEE AVSS, Beijing, 2012). Visual feature-based tracking systems need to adapt to variations in the appearance of an object and in the scene for robust performance. Though these variations may be small over short time steps, they can accumulate and deteriorate the quality of the matching process across longer intervals. Tracking in aerial imagery can be challenging, as viewing geometry, calibration inaccuracies, complex flight paths, and background changes, combined with illumination changes and occlusions, can result in rapid appearance change of objects. Balancing appearance adaptation with stability to avoid tracking non-target objects can lead to longer tracks, which is an indicator of tracker robustness. The approach described in this poster handles affine changes such as rotation by explicit orientation estimation, scale changes by using a multiscale Hessian edge detector, and drift correction by using segmentation. We propose an appearance update approach that addresses the 'drifting' problem using this adaptive scheme within a tracking environment comprising a rich feature set and a motion model.
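    Explicit orientation estimation can be illustrated with a crude Radon-style projection search: the rotation angle whose column-sum projection is most concentrated aligns with the target's principal axis. This is a simplified stand-in, not the poster's exact estimator:

```python
import numpy as np
from scipy.ndimage import rotate

def dominant_orientation(img, angles):
    """Return the angle (degrees) at which rotating the patch makes its
    column-sum projection most peaked (maximum variance), i.e. the angle
    that brings the target's principal axis vertical."""
    best_angle, best_var = None, -1.0
    for a in angles:
        proj = rotate(img, a, reshape=False, order=1).sum(axis=0)
        if proj.var() > best_var:
            best_angle, best_var = a, proj.var()
    return best_angle

img = np.zeros((64, 64))
img[10:54, 30:34] = 1.0                            # vertical bar target
print(dominant_orientation(img, range(0, 180, 15)))  # 0
```

    Searching a coarse angle grid like this keeps the per-target cost low, which matters when orientation must be re-estimated every frame.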

    Multiscale Tikhonov-Total Variation Image Restoration Using Spatially Varying Edge Coherence Exponent

    Edge-preserving regularization methods based on partial differential equations (PDEs), although extensively studied and widely used for image restoration, still have limitations in adapting to local structures. We propose a spatially adaptive multiscale variable-exponent anisotropic variational PDE method that overcomes current shortcomings, such as over-smoothing and staircasing artifacts, while still retaining and enhancing edge structures across scales. Our model automatically balances Tikhonov and total variation (TV) regularization effects using scene content information by incorporating a spatially varying edge coherence exponent map constructed from the eigenvalues of the filtered structure tensor. The multiscale exponent model we develop leads to a novel restoration method that preserves edges better and provides selective denoising without generating artifacts for both additive and multiplicative noise models. Mathematical analysis of the proposed method in variable exponent space establishes the existence of a minimizer and its properties. The discretization method we use satisfies the maximum-minimum principle, which guarantees that artificial edge regions are not created. Extensive experimental results on synthetic and natural images indicate that the proposed multiscale Tikhonov-TV (MTTV) and dynamical MTTV methods perform better than many contemporary denoising algorithms in terms of several metrics, including signal-to-noise ratio improvement and structure preservation. Promising extensions to handle multiplicative noise models and multichannel imagery are also discussed.
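    The balance between Tikhonov and TV regularization can be written as a variable-exponent energy; the form below is the standard template for such models (the gradient-based exponent map shown is a common illustrative choice, not necessarily the paper's structure-tensor construction):

```latex
\min_{u} \; E(u) \;=\; \int_{\Omega} \frac{1}{p(x)}\,\lvert \nabla u(x) \rvert^{p(x)} \, dx
\;+\; \frac{\lambda}{2} \int_{\Omega} \bigl(u(x) - f(x)\bigr)^{2} \, dx,
\qquad 1 \le p(x) \le 2,
```

    where $f$ is the noisy image and the exponent map drives the regularizer toward TV ($p(x) \to 1$, edge preserving) where edge coherence is high and toward Tikhonov ($p(x) \to 2$, isotropic smoothing) in homogeneous regions, e.g. $p(x) = 1 + \tfrac{1}{1 + k\,\lvert \nabla G_{\sigma} * f(x) \rvert^{2}}$.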

    Multiscale anisotropic tensor filtering of fluorescence microscopy for denoising vasculature

    This poster is related to the following paper: V. B. S. Prasath, R. Pelapur, O. Glinskii, V. Glinskii, V. Huxley, and K. Palaniappan. Multiscale Tensor Anisotropic Filtering of Fluorescence Microscopy for Denoising Microvasculature. IEEE International Symposium on Biomedical Imaging (ISBI), New York, USA.

    Computerized microvasculature dura mater structure extraction and analysis of fluorescence microscopy imagery

    Poster from: V. B. S. Prasath, R. Pelapur, Y. M. Kassim, S. Meena, A. Palaniappan, U. Sampathkumar, O. Glinskii, V. Glinskii, V. Huxley, and K. Palaniappan. Computerized Microvasculature Dura Mater Structure Extraction and Analysis of Fluorescence Microscopy Imagery. Missouri Informatics Symposium, April 2016.